Vineeth Sai Narajala

AI Security Researcher

AI Software and Platform

Vineeth Sai is a Senior Technical Lead for AI Security Research at Cisco, where he leads initiatives to secure AI systems across the company's networking, security, and infrastructure products. His expertise spans AI security topics including model safety guardrails, prompt-injection protections, compute isolation, and secure token management. Prior to Cisco, Vineeth served as a Senior Generative AI Security Engineer at Amazon Web Services (AWS) and as a Senior Security Engineer at Meta. An active researcher, Vineeth has published several peer-reviewed papers on AI security, including groundbreaking work on securing agentic AI systems.

Beyond his corporate roles, Vineeth serves as Co-Lead of OWASP AIVSS and as Workstream Co-Lead of the GenAI Security Project's Agentic Application Security initiative, where he advances security standards within the open-source community. Through OWASP, he authors white papers, develops comprehensive threat modeling guides for multi-agentic systems, and helps establish industry-wide security best practices for generative AI applications. A recognized thought leader in AI security, Vineeth regularly presents at major security conferences, including RSA San Francisco, OWASP Global AppSec Boston, BSides Harrisburg/Austin/Seattle/Baltimore, and CypherCon Milwaukee. With deep technical expertise at the intersection of AI and cybersecurity, Vineeth focuses on practical security solutions that protect AI systems at scale.

Articles

Your Model’s Memory Has Been Compromised: Adversarial Hubness in RAG Systems

3 min read

Prompt injections and jailbreaks remain a major concern for AI security, and for good reason: models remain susceptible to crafted inputs that trick them into bypassing guardrails or leaking system prompts. But AI deployments don't just process prompts at inference time (that is, when the model is actively being queried): they may also retrieve, rank, and synthesize external data in real time. Each of those steps is a potential adversarial entry point.

Personal AI Agents like OpenClaw Are a Security Nightmare

4 min read

This blog was written in collaboration with Amy Chang, Vineeth Sai Narajala, and Idan Habler. Over the past few weeks, Clawdbot (then renamed Moltbot, later renamed OpenClaw) has achieved virality as an open-source, self-hosted personal AI assistant agent that runs locally and executes actions on the user's behalf. The bot's explosive rise is driven by […]

Securing AI Agents with Cisco’s Open-Source A2A Scanner

3 min read

The Rise of Agent Networks: A New Security Frontier  Agent-to-Agent (A2A) frameworks have emerged to support organizations as they move from isolated AI applications to interconnected networks of autonomous agents. A2A enables software agents to discover, authenticate, and collaborate across organizational boundaries, unlocking unprecedented automation capabilities. A2A also introduces an expanded attack surface, and […]